735 research outputs found

    Bug Hunting with False Negatives Revisited

    Get PDF
    Safe data abstractions are widely used for verification purposes. Positive verification results can be transferred from the abstract to the concrete system. When a property is violated in the abstract system, one still has to check whether a concrete violation scenario exists. However, even when the violation scenario is not reproducible in the concrete system (a false negative), it may still contain information on possible sources of bugs. Here, we propose a bug hunting framework based on abstract violation scenarios. We first extract a violation pattern from one abstract violation scenario. The violation pattern represents multiple abstract violation scenarios, increasing the chance that a corresponding concrete violation exists. Then, we look for a concrete violation that corresponds to the violation pattern by using constraint solving techniques. Finally, we define the class of counterexamples that we can handle and argue the correctness of the proposed framework. Our method combines two formal techniques: model checking and constraint solving. Through an analysis of contracting and precise abstractions, we are able to integrate overapproximation by abstraction with concrete counterexample generation.
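The pattern-based search for a concrete counterexample can be sketched in miniature. Everything below is invented for illustration: a parity abstraction over a tiny integer state space, and a brute-force depth-first search standing in for the constraint solver used in the paper.

```python
# Toy setting: concrete states are integers 0..9, a step adds 1 or 2.
# The abstraction maps a state to its parity; an abstract counterexample
# is a sequence of parities. The "violation pattern" relaxes each position
# of one abstract trace to a set of admissible abstract values.

def abstract(state):
    return state % 2  # parity abstraction

def concrete_successors(state):
    return [s for s in (state + 1, state + 2) if s <= 9]

def matches(trace, pattern):
    return len(trace) == len(pattern) and all(abstract(s) in p for s, p in zip(trace, pattern))

def find_concrete_violation(pattern, init_states):
    # Depth-first search for a concrete trace matching the pattern
    # (a brute-force stand-in for the constraint solving in the paper).
    def dfs(trace):
        if len(trace) == len(pattern):
            return trace
        for nxt in concrete_successors(trace[-1]):
            if abstract(nxt) in pattern[len(trace)]:
                found = dfs(trace + [nxt])
                if found:
                    return found
        return None
    for s0 in init_states:
        if abstract(s0) in pattern[0]:
            found = dfs([s0])
            if found:
                return found
    return None

# One abstract violation scenario had parities [0, 1, 0]; widening the last
# position yields a pattern covering several abstract scenarios at once.
pattern = [{0}, {1}, {0, 1}]
print(find_concrete_violation(pattern, init_states=[0]))  # [0, 1, 2]
```

Widening positions of the pattern is what raises the chance that some concrete trace matches, which is the core idea of the framework.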

    Optimizing the management of financial flows based on assessment of regional multiplier effects

    Full text link
    This article examines the issues of improving the effectiveness of the management of regional financial flows. As their main hypothesis, the authors argue that the management of regional financial flows should be optimized on the basis of multiplier economic effects, which make it possible to better assess the performance of regional socio-economic policy. The article presents a multifactor model for managing financial flows at the regional level: a matrix of financial flows based on the principles of general economic equilibrium theory, the input-output balancing method, and the methodology of the system of national accounts. The consolidated budgetary balance sheet of the region is presented as an important structural element of the model. A methodology has been developed for integrating the consolidated budgetary balance sheet of the region into the matrix of financial flows. Using the example of individual subjects of the Russian Federation, the authors calculated the matrix multipliers of the consolidated budgetary balance sheet, which make it possible to simulate the multiplier economic effects resulting from the impact of different types of exogenous economic factors on the development of regions, to forecast the impact of changes in fiscal reallocation on GRP and household income, to assess the impact of external investment on the economic growth of the regions, and to study the effectiveness of federal tax policy at the regional level. The article demonstrates that the value of the multiplier effect depends on several factors, including the external trade relations of the region, its dependence on imports, the share of value added in gross output, and the propensity of households to save.
The approach proposed by the authors can be used by government authorities at different levels in developing strategies of socio-economic development, in assessing the extent and areas of impact of various exogenous factors on the economy of the region, and in analyzing investment initiatives of the private sector seeking state financial support for its projects. The authors propose areas for improving the management of financial flows based on maximizing the multiplier economic effects in the short and medium term for regions with different levels of fiscal capacity. The article has been prepared with the support of a grant of the Russian Foundation for Humanities (project №15–02–00587).
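The multiplier logic behind such a matrix of financial flows can be illustrated with a standard Leontief input-output computation. The two-sector coefficient matrix and demand figures below are invented for illustration and are not taken from the article.

```python
# Toy two-sector input-output example (all coefficients invented).
# A[i][j] is the input from sector i needed per unit of output of sector j.
A = [[0.2, 0.3],
     [0.1, 0.4]]

def leontief_inverse_2x2(A):
    # (I - A)^-1 computed via the closed-form 2x2 matrix inverse.
    b11, b12 = 1 - A[0][0], -A[0][1]
    b21, b22 = -A[1][0], 1 - A[1][1]
    det = b11 * b22 - b12 * b21
    return [[b22 / det, -b12 / det],
            [-b21 / det, b11 / det]]

def output_for_demand(A, d):
    # Total output x solving x = A x + d, i.e. x = (I - A)^-1 d.
    L = leontief_inverse_2x2(A)
    return [L[0][0] * d[0] + L[0][1] * d[1],
            L[1][0] * d[0] + L[1][1] * d[1]]

# Multiplier effect of one extra unit of final demand in sector 0:
base = output_for_demand(A, [100.0, 50.0])
bump = output_for_demand(A, [101.0, 50.0])
print([round(b - a, 3) for a, b in zip(base, bump)])  # [1.333, 0.222]
```

The extra unit of exogenous demand raises total output by more than one unit in sum; that amplification is the multiplier effect the article uses to compare regions.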

    Petri nets with may/must semantics: Preserving properties through data refinements

    Get PDF
    Many systems used in process management, like workflow systems, are developed in a top-down fashion, in which the original design is refined at each step, bringing it closer to the underlying reality. Underdefined specifications, however, cannot be used for verification, since both false positives and false negatives can be reported. In this paper we introduce colored Petri nets whose guards can evaluate to true, false, or an indefinite value, the last reflecting underspecification. This results in a semantics of Petri nets with may- and must-enabledness and may- and must-firings. In this framework we introduce property-preserving refinements that allow for verification in an early design phase. We present results on property preservation through refinements. We also apply our framework to workflow nets, introduce the notions of may- and must-soundness, and show that they are preserved through refinements. We briefly describe a prototype currently under implementation.
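The three-valued guard evaluation can be sketched with Kleene logic, where an indefinite guard yields may-enabledness and a definitely true guard yields must-enabledness. The encoding below is an illustrative sketch, not the paper's notation.

```python
# Kleene three-valued logic for guards: True, False, or None,
# where None models an underdefined (indefinite) guard value.

def k_and(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None  # indefinite

def k_not(a):
    return None if a is None else (not a)

def enabledness(guard_value, tokens_available):
    """Classify a transition as must-, may-, or not enabled."""
    if not tokens_available or guard_value is False:
        return "not enabled"
    return "must-enabled" if guard_value is True else "may-enabled"

print(enabledness(k_and(True, None), tokens_available=True))  # may-enabled
print(enabledness(k_and(True, True), tokens_available=True))  # must-enabled
```

Refining the design later replaces `None` guard values with definite ones, which is why properties established on may/must semantics can survive refinement.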

    Abstraction and flow analysis for model checking open asynchronous systems

    Get PDF
    Formal methods, especially model checking, are an indispensable part of the software engineering process. With large software systems currently beyond the reach of fully automatic verification, however, a combination of decomposition and abstraction techniques is needed. To model check components of a system, a standard approach is to close the component with an abstraction of its environment. To make this useful in practice, the closing of the component should be automatic, both for data and for control abstraction. Specifically for model checking asynchronous open systems, external input queues should be removed, as they are a potential source of a combinatorial state explosion. In this paper, we close a component synchronously by embedding the external environment directly into the system to avoid the external queues, while for the data we use a two-valued abstraction, namely whether a value is influenced from the outside or not. This gives a more precise analysis than the one investigated in [7]. To further combat the state explosion problem, we combine this data abstraction with a static analysis that removes superfluous code fragments. The static analysis we use is reminiscent of the one presented in [7], but employs a combination of a may- and a must-analysis instead of a may-analysis only.
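The two-valued data abstraction resembles a taint analysis: each variable is abstracted to whether it may be influenced from the outside. A minimal sketch over straight-line assignments (the program and variable names are invented):

```python
# Two-valued data abstraction: each variable is either INFLUENCED by the
# environment or CLEAN. Each statement is (target, list of source names).

INFLUENCED, CLEAN = "influenced", "clean"

def abstract_run(stmts, external_inputs):
    state = {}
    for target, sources in stmts:
        # A target becomes influenced if any source is an external input
        # or is itself already influenced.
        if any(s in external_inputs or state.get(s) == INFLUENCED for s in sources):
            state[target] = INFLUENCED
        else:
            state[target] = CLEAN
    return state

# y depends on external input 'env'; z depends only on internal x.
program = [("x", []), ("y", ["env"]), ("z", ["x"]), ("w", ["y", "z"])]
print(abstract_run(program, external_inputs={"env"}))
```

Tracking only this one bit per variable is what keeps the closed component small enough to model check.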

    Using fairness to make abstractions work

    Get PDF
    Abstractions often introduce infinite traces that have no corresponding traces at the concrete level, which can cause verification to fail. Refinement does not always help to eliminate those traces. In this paper, we consider a timer abstraction that introduces cyclic behaviour on abstract timers, and we show how one can exclude such cycles by imposing a strong fairness constraint on the abstract model. Exploiting the fact that the loop on the abstract timer is a self-loop, we reduce the strong fairness constraint to a weak fairness constraint and embed it into the verification algorithm. We implemented the algorithm in the DTSpin model checker and demonstrated its efficiency on case studies. The same approach can be used for other data abstractions that introduce self-loops.
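The idea of discarding spurious cycles introduced by the timer abstraction can be sketched as follows. The graph encoding, state names, and the way cycles are collected are illustrative inventions, not DTSpin's algorithm.

```python
# Sketch: when the only cycle in an abstract lasso is a self-loop introduced
# by the timer abstraction, discard it instead of reporting a spurious
# infinite trace.

def lasso_cycles(graph, init):
    """Collect cycles closed along depth-first paths from init."""
    cycles, stack = [], [(init, [init])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt in path:
                cycles.append(path[path.index(nxt):])
            else:
                stack.append((nxt, path + [nxt]))
    return cycles

def fair_cycles(graph, init, timer_selfloop_states):
    # Weak-fairness-style filter: drop cycles that consist of a single
    # abstract-timer state looping on itself.
    return [c for c in lasso_cycles(graph, init)
            if not (len(c) == 1 and c[0] in timer_selfloop_states)]

# "t" is an abstract timer state with a self-loop introduced by abstraction.
g = {"s0": ["t"], "t": ["t", "s1"], "s1": ["s0"]}
print(fair_cycles(g, "s0", timer_selfloop_states={"t"}))  # [['s0', 't', 's1']]
```

The self-loop on `t` exists only in the abstract model, so cycles consisting solely of it are filtered out rather than reported as counterexamples.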

    Technological characteristics of five new apple cultivars of VNIISPK breeding as raw materials for juice industry

    Get PDF
    Received: March 5th, 2022; Accepted: June 13th, 2022; Published: July 5th, 2022; Correspondence: [email protected]
    The data of the technological assessment of the suitability of five new winter apple cultivars of VNIISPK breeding (‘Aleksandr Boyko’, ‘Blagodat’, ‘Vavilovskoye’, ‘Ivanovskoye’, ‘Patriot’ and ‘Prazdnichnoye’) for the production of raw materials for the juice industry are presented. The main technological indicators characterizing the suitability of a cultivar for juice production were studied: the firmness of the pulp, the yield of juice, and the content of soluble solids, sugars, titratable acids and catechins, in comparison with the standard cultivar ‘Antonovka’. It was found that all of the new cultivars were distinguished by a higher content of soluble solids and sugars in the juice than the standard cultivar, a lower content of titratable acids, and higher tasting ratings. ‘Aleksandr Boyko’ surpassed all studied cultivars in such indicators as juice yield and the content of soluble solids, sugars and catechins in the juice, and was also distinguished by low acidity. All studied cultivars, especially ‘Aleksandr Boyko’, are promising as raw materials for the juice industry.

    Individual educational trajectories: from educational supermarket to sensemaking to blended values

    Get PDF
    Bringing the idea of students’ individual educational trajectories into focus, Russian universities employ an ‘educational supermarket’ model and take for granted that universities are capable of incorporating any novelty. This paper argues the opposite: universities use specific ‘filters’ to minimize external influence. Introducing new filters can destabilize the system and make it sensitive to outside cues. This phase, with its sensemaking potential, is the most effective for building educational trajectories. However, this phase is short, as universities tend to reduce volatility and complexity: they are ‘autopoietic’. Genuine individualization stems from a person’s understanding of the value that he or she can create over the trajectory of their life, which extends beyond the autopoietic frame of any single university. This paper discusses a shift in focus from individualization to blended value, with the central premise that value is itself a ‘blend’ of economic, environmental, social, political, and personal factors. This notion shifts the focus from the university as the center of a person’s individualization to the university as an equal stakeholder and actor that identifies and maximizes blended value.

    Workflow completion patterns

    Full text link
    The most common correctness requirement for a (business) workflow is the completion requirement, imposing that, in some form, every case instance of the workflow reaches its final state. In this paper, we define three workflow completion patterns, called mandatory, optional, and possible completion. These patterns are formalized in terms of the temporal logic CTL*, to remove ambiguities, allow for easy comparison, and have direct applicability. In contrast to existing methods, we do not look at the control flow in isolation but include some data information as well. In this way the analysis remains tractable but gains precision. Together with our previous work on data-flow (anti-)patterns, this paper is a significant step towards a unifying framework for complete workflow verification, using the well-developed, stable, adaptable, and effective model-checking approach.
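As an illustration of what checking such a pattern involves: mandatory completion is in the spirit of "on all paths, the final state is eventually reached", which on a finite transition system can be checked with the standard AF fixpoint. This is a generic sketch, not the paper's exact CTL* formalization; terminal non-goal states are treated as failing.

```python
def check_AF(states, transitions, goal):
    """Standard least fixpoint for AF goal: a state satisfies AF goal iff
    it is a goal state, or it has successors and all of them satisfy it.
    (Terminal non-goal states are treated as failing.)"""
    sat = set(goal)
    changed = True
    while changed:
        changed = False
        for s in states:
            succs = transitions.get(s, [])
            if s not in sat and succs and all(t in sat for t in succs):
                sat.add(s)
                changed = True
    return sat

# Tiny workflow: start -> task -> done, with a retry self-loop on task.
states = {"start", "task", "done"}
transitions = {"start": ["task"], "task": ["task", "done"]}
print("start" in check_AF(states, transitions, goal={"done"}))  # False
```

The retry loop means some path never reaches `done`, so mandatory completion fails here even though completion is always still possible, which is exactly why the paper distinguishes several completion patterns.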

    On generation of time-based label refinements

    Get PDF
    Process mining is a research field focused on the analysis of event data with the aim of extracting insights into processes. Applying process mining techniques to data from smart home environments has the potential to provide valuable insights into (un)healthy habits and to contribute to ambient assisted living solutions. Finding the right event labels to enable the application of process mining techniques is, however, far from trivial, as simply using the triggering sensor as the label for sensor events results in uninformative models that allow for too much behavior (overgeneralizing). Refinements of sensor-level event labels suggested by domain experts have been shown to enable the discovery of more precise and insightful process models. However, there exists no automated approach to generate refinements of event labels in the context of process mining. In this paper we propose a framework for the automated generation of label refinements based on the time attribute of events. We show on a case study with real-life smart home event data that behaviorally more specific, and therefore more insightful, process models can be found by using automatically generated refined labels in process discovery.
Comment: Accepted at the CS&P 2016 workshop. Overlap in preliminaries with arXiv:1606.0725
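A time-based label refinement can be sketched as follows. The fixed time-of-day cut-offs and sensor names below are invented for illustration; the paper's contribution is to generate such refinements automatically rather than from hand-picked thresholds.

```python
# Sketch: refine a sensor-level label by the event's time of day, so the
# same motion sensor firing at night vs. in the morning yields distinct
# event labels for process discovery.

def refine_label(sensor, hour):
    if 0 <= hour < 6:
        part = "night"
    elif 6 <= hour < 12:
        part = "morning"
    elif 12 <= hour < 18:
        part = "afternoon"
    else:
        part = "evening"
    return f"{sensor}_{part}"

events = [("motion_kitchen", 7), ("motion_kitchen", 23), ("door_front", 8)]
print([refine_label(s, h) for s, h in events])
```

Splitting one sensor label into several time-qualified labels is what lets the discovered model distinguish, say, a midnight snack from breakfast, instead of overgeneralizing both to "kitchen motion".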

    Analyzing control-flow and data-flow in workflow processes in a unified way

    Get PDF
    Workflow correctness properties are usually defined based on one workflow perspective only, e.g. the control flow or the data flow. In this paper we consider workflow correctness criteria that look at the control flow extended with read/write/destroy information for data items. We formalize some common control-flow errors, and we introduce behavioral anti-patterns related to the handling of data. In addition to extending, refining, and classifying existing methods, our paper provides a unifying framework for complete workflow verification, using the well-known, stable, adaptable, and effective model-checking approach.
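One classic data-related anti-pattern in this line of work is reading a data item that was never written. A minimal sketch over a linear trace of (operation, data item) events, with invented operation names:

```python
# Sketch: detect the "read (or destroyed) before ever written" anti-pattern
# on a linear execution trace of (operation, data item) events.

def missing_data(trace):
    """Return data items that are read or destroyed before being written."""
    written, violations = set(), set()
    for op, item in trace:
        if op == "write":
            written.add(item)
        elif op in ("read", "destroy") and item not in written:
            violations.add(item)
    return violations

trace = [("write", "order"), ("read", "order"),
         ("read", "invoice"), ("write", "invoice")]
print(missing_data(trace))  # {'invoice'}
```

A full analysis would check every execution path of the workflow via model checking rather than a single trace, but the read/write/destroy bookkeeping is the same.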